Purpose, Process, Product

This group assignment provides practice with foreign exchange markets and with R models of those markets. Specifically, we will practice reading in data, exploring time series, estimating autocorrelations and cross-correlations, and investigating volatility clustering in financial time series. We will summarize our experience in a debrief, paying special attention to the financial economics of exchange rates.

Part 1

In this part we will build and explore a data set using filters together with ifelse() and diff() statements. We will then answer some questions using plots and a pivot-table report. We will also review a function that houses our approach in case we would like to run the same analysis on other data sets.
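The reusable-function idea can be sketched as follows. This is a hypothetical helper (the name and interface are our own, not part of the assignment code) that bundles the transform steps used later in this part so they can be rerun on another rate file:

```r
# Hypothetical helper: given a data frame of rate levels with dates in
# column 1, return percent log returns, absolute sizes, and directions.
prep_rates <- function(rates) {
  r <- diff(log(as.matrix(rates[, -1]))) * 100          # percent log returns
  size <- abs(r)                                        # volatility proxy
  direction <- ifelse(r > 0, 1, ifelse(r < 0, -1, 0))   # up/down/flat
  list(returns = r, size = size, direction = direction)
}

# Example with three made-up observations of a single pair
demo <- data.frame(DATE = c("1/2/2013", "1/3/2013", "1/4/2013"),
                   USD.EUR = c(1.30, 1.31, 1.30))
str(prep_rates(demo))
```

A function like this would let us point the same analysis at a different FRED download with one call.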

Problem

Marketing and accounts receivable managers at our company continue to note that we have significant exposure to exchange rates. Our functional currency (the one we report in financial statements) is the U.S. dollar (USD).

  • Our customer base is located in the United Kingdom, across the European Union, and in Japan. The exposure hits the gross revenue line of our financials.

  • Cash flow is further affected by the ebb and flow of the accounts receivable components of working capital in producing and selling several products. When exchange rates are volatile, so are earnings and, more importantly, our cash flow.

  • Our company has also missed earnings forecasts for five straight quarters.

To get a handle on exchange rate exposures we download this data set and review some basic aspects of the exchange rates.

# Read in data
library(zoo)
## 
## Attaching package: 'zoo'
## The following objects are masked from 'package:base':
## 
##     as.Date, as.Date.numeric
library(xts)
library(ggplot2)
# Read and review a csv file from FRED
exrates <- na.omit(read.csv("exrates.csv", header = TRUE))
# Check the data
head(exrates)
##        DATE USD.EUR USD.GBP USD.CNY USD.JPY
## 1 1/28/2013  1.3459  1.5686  6.2240   90.73
## 2 1/29/2013  1.3484  1.5751  6.2259   90.65
## 3 1/30/2013  1.3564  1.5793  6.2204   91.05
## 4 1/31/2013  1.3584  1.5856  6.2186   91.28
## 5  2/1/2013  1.3692  1.5744  6.2265   92.54
## 6  2/4/2013  1.3527  1.5737  6.2326   92.57
tail(exrates)
##           DATE USD.EUR USD.GBP USD.CNY USD.JPY
## 1248 1/19/2018  1.2238  1.3857  6.3990  110.56
## 1249 1/22/2018  1.2230  1.3944  6.4035  111.15
## 1250 1/23/2018  1.2277  1.3968  6.4000  110.46
## 1251 1/24/2018  1.2390  1.4198  6.3650  109.15
## 1252 1/25/2018  1.2488  1.4264  6.3189  108.70
## 1253 1/26/2018  1.2422  1.4179  6.3199  108.38
str(exrates)
## 'data.frame':    1253 obs. of  5 variables:
##  $ DATE   : Factor w/ 1253 levels "1/10/2014","1/10/2017",..: 62 66 73 77 409 484 488 492 496 499 ...
##  $ USD.EUR: num  1.35 1.35 1.36 1.36 1.37 ...
##  $ USD.GBP: num  1.57 1.58 1.58 1.59 1.57 ...
##  $ USD.CNY: num  6.22 6.23 6.22 6.22 6.23 ...
##  $ USD.JPY: num  90.7 90.7 91 91.3 92.5 ...
# Begin to explore the data
summary(exrates)
##         DATE         USD.EUR         USD.GBP         USD.CNY     
##  1/10/2014:   1   Min.   :1.038   Min.   :1.212   Min.   :6.040  
##  1/10/2017:   1   1st Qu.:1.107   1st Qu.:1.324   1st Qu.:6.178  
##  1/10/2018:   1   Median :1.158   Median :1.514   Median :6.261  
##  1/11/2016:   1   Mean   :1.199   Mean   :1.474   Mean   :6.401  
##  1/11/2017:   1   3rd Qu.:1.314   3rd Qu.:1.573   3rd Qu.:6.627  
##  1/11/2018:   1   Max.   :1.393   Max.   :1.716   Max.   :6.958  
##  (Other)  :1247                                                  
##     USD.JPY      
##  Min.   : 90.65  
##  1st Qu.:102.14  
##  Median :109.88  
##  Mean   :109.33  
##  3rd Qu.:116.76  
##  Max.   :125.58  
## 

Questions

  1. What is the nature of exchange rates in general and in particular for this data set? We want to reflect the ups and downs of rate movements, known to managers as currency appreciation and depreciation.
  • We will calculate percentage changes as log returns of currency pairs. Our interest is in the ups and downs. To look at those we use ifelse() statements to define a new column called direction. We will build a data frame to house this initial analysis.

  • Using this data frame, interpret appreciation and depreciation in terms of the impact on the receipt of cash flow from customers’ accounts that are denominated in other than our USD functional currency.
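To make the cash-flow interpretation concrete, here is a small sketch (the amounts and rates are hypothetical, chosen near the levels in this data set): a EUR-denominated receivable converts to fewer USD when the euro depreciates against the dollar between booking and collection.

```r
# Hypothetical receivable of EUR 1 million, converted at two USD/EUR rates
receivable_eur <- 1e6
rate_booked    <- 1.35   # USD per EUR when the sale was booked
rate_collected <- 1.22   # USD per EUR at collection (EUR has depreciated)

usd_booked    <- receivable_eur * rate_booked     # USD expected at booking
usd_collected <- receivable_eur * rate_collected  # USD actually received
usd_booked - usd_collected                        # shortfall in USD
```

The same mechanics run in reverse for an appreciation: the receivable converts to more USD than booked, a cash-flow gain.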

# Compute log differences percent using as.matrix to force numeric type
exrates.r <- diff(log(as.matrix(exrates[, -1]))) * 100
head(exrates.r)
##      USD.EUR     USD.GBP     USD.CNY     USD.JPY
## 2  0.1855770  0.41352605  0.03052233 -0.08821260
## 3  0.5915427  0.26629486 -0.08837968  0.44028690
## 4  0.1473405  0.39811737 -0.02894123  0.25228994
## 5  0.7919091 -0.70886373  0.12695761  1.37092779
## 6 -1.2124033 -0.04447127  0.09792040  0.03241316
## 7  0.3100091 -0.54159233 -0.05456676  0.82836254
tail(exrates.r)
##          USD.EUR    USD.GBP     USD.CNY    USD.JPY
## 1248  0.00000000 -0.2306640 -0.28869056 -0.2890175
## 1249 -0.06539153  0.6258788  0.07029877  0.5322280
## 1250  0.38356435  0.1719691 -0.05467255 -0.6227176
## 1251  0.91621024  1.6332111 -0.54837584 -1.1930381
## 1252  0.78784876  0.4637771 -0.72690896 -0.4131289
## 1253 -0.52990891 -0.5976884  0.01582429 -0.2948224
str(exrates.r)
##  num [1:1252, 1:4] 0.186 0.592 0.147 0.792 -1.212 ...
##  - attr(*, "dimnames")=List of 2
##   ..$ : chr [1:1252] "2" "3" "4" "5" ...
##   ..$ : chr [1:4] "USD.EUR" "USD.GBP" "USD.CNY" "USD.JPY"
# Create size and direction
size <- na.omit(abs(exrates.r)) # size is indicator of volatility
head(size)
##     USD.EUR    USD.GBP    USD.CNY    USD.JPY
## 2 0.1855770 0.41352605 0.03052233 0.08821260
## 3 0.5915427 0.26629486 0.08837968 0.44028690
## 4 0.1473405 0.39811737 0.02894123 0.25228994
## 5 0.7919091 0.70886373 0.12695761 1.37092779
## 6 1.2124033 0.04447127 0.09792040 0.03241316
## 7 0.3100091 0.54159233 0.05456676 0.82836254
colnames(size) <- paste(colnames(size),".size", sep = "") # Teetor
direction <- ifelse(exrates.r > 0, 1, ifelse(exrates.r < 0, -1, 0)) # another indicator of volatility
colnames(direction) <- paste(colnames(direction),".dir", sep = "")
head(direction)
##   USD.EUR.dir USD.GBP.dir USD.CNY.dir USD.JPY.dir
## 2           1           1           1          -1
## 3           1           1          -1           1
## 4           1           1          -1           1
## 5           1          -1           1           1
## 6          -1          -1           1           1
## 7           1          -1          -1           1
# Convert into a time series object: 
# 1. Split into date and rates
dates <- as.Date(exrates$DATE[-1], "%m/%d/%Y")
values <- cbind(exrates.r, size, direction)
# for dplyr pivoting we need a data frame
exrates.df <- data.frame(dates = dates, returns = exrates.r, size = size, direction = direction)
str(exrates.df) # notice the returns.* and direction.* prefixes
## 'data.frame':    1252 obs. of  13 variables:
##  $ dates                : Date, format: "2013-01-29" "2013-01-30" ...
##  $ returns.USD.EUR      : num  0.186 0.592 0.147 0.792 -1.212 ...
##  $ returns.USD.GBP      : num  0.4135 0.2663 0.3981 -0.7089 -0.0445 ...
##  $ returns.USD.CNY      : num  0.0305 -0.0884 -0.0289 0.127 0.0979 ...
##  $ returns.USD.JPY      : num  -0.0882 0.4403 0.2523 1.3709 0.0324 ...
##  $ size.USD.EUR.size    : num  0.186 0.592 0.147 0.792 1.212 ...
##  $ size.USD.GBP.size    : num  0.4135 0.2663 0.3981 0.7089 0.0445 ...
##  $ size.USD.CNY.size    : num  0.0305 0.0884 0.0289 0.127 0.0979 ...
##  $ size.USD.JPY.size    : num  0.0882 0.4403 0.2523 1.3709 0.0324 ...
##  $ direction.USD.EUR.dir: num  1 1 1 1 -1 1 -1 -1 -1 1 ...
##  $ direction.USD.GBP.dir: num  1 1 1 -1 -1 -1 1 1 1 -1 ...
##  $ direction.USD.CNY.dir: num  1 -1 -1 1 1 -1 1 1 0 0 ...
##  $ direction.USD.JPY.dir: num  -1 1 1 1 1 1 1 -1 -1 1 ...
# 2. Make an xts object with row names equal to the dates
exrates.xts <- na.omit(as.xts(values, dates))
str(exrates.xts)
## An 'xts' object on 2013-01-29/2018-01-26 containing:
##   Data: num [1:1252, 1:12] 0.186 0.592 0.147 0.792 -1.212 ...
##  - attr(*, "dimnames")=List of 2
##   ..$ : NULL
##   ..$ : chr [1:12] "USD.EUR" "USD.GBP" "USD.CNY" "USD.JPY" ...
##   Indexed by objects of class: [Date] TZ: UTC
##   xts Attributes:  
##  NULL
exrates.zr <- na.omit(as.zooreg(exrates.xts))
str(exrates.zr)
## 'zooreg' series from 2013-01-29 to 2018-01-26
##   Data: num [1:1252, 1:12] 0.186 0.592 0.147 0.792 -1.212 ...
##  - attr(*, "dimnames")=List of 2
##   ..$ : NULL
##   ..$ : chr [1:12] "USD.EUR" "USD.GBP" "USD.CNY" "USD.JPY" ...
##   Index:  Date[1:1252], format: "2013-01-29" "2013-01-30" "2013-01-31" "2013-02-01" "2013-02-04" ...
##   Frequency: 1
head(exrates.xts)
##               USD.EUR     USD.GBP     USD.CNY     USD.JPY USD.EUR.size
## 2013-01-29  0.1855770  0.41352605  0.03052233 -0.08821260    0.1855770
## 2013-01-30  0.5915427  0.26629486 -0.08837968  0.44028690    0.5915427
## 2013-01-31  0.1473405  0.39811737 -0.02894123  0.25228994    0.1473405
## 2013-02-01  0.7919091 -0.70886373  0.12695761  1.37092779    0.7919091
## 2013-02-04 -1.2124033 -0.04447127  0.09792040  0.03241316    1.2124033
## 2013-02-05  0.3100091 -0.54159233 -0.05456676  0.82836254    0.3100091
##            USD.GBP.size USD.CNY.size USD.JPY.size USD.EUR.dir USD.GBP.dir
## 2013-01-29   0.41352605   0.03052233   0.08821260           1           1
## 2013-01-30   0.26629486   0.08837968   0.44028690           1           1
## 2013-01-31   0.39811737   0.02894123   0.25228994           1           1
## 2013-02-01   0.70886373   0.12695761   1.37092779           1          -1
## 2013-02-04   0.04447127   0.09792040   0.03241316          -1          -1
## 2013-02-05   0.54159233   0.05456676   0.82836254           1          -1
##            USD.CNY.dir USD.JPY.dir
## 2013-01-29           1          -1
## 2013-01-30          -1           1
## 2013-01-31          -1           1
## 2013-02-01           1           1
## 2013-02-04           1           1
## 2013-02-05          -1           1

We can plot with the ggplot2 package. In the ggplot statements we use aes, “aesthetics”, to pick the x (horizontal) and y (vertical) axes. Use group = 1 to ensure that all of the data is plotted. The added (+) geom_line is the geometric layer that builds the line plot.

library(ggplot2)
library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
title.chg <- "Exchange Rate Percent Changes"
p1 <- autoplot.zoo(exrates.xts[,1:4]) + ggtitle(title.chg) + ylim(-5, 5)
p2 <- autoplot.zoo(exrates.xts[,5:8]) + ggtitle(title.chg) + ylim(-5, 5)
ggplotly(p1)
  2. Let’s dig deeper and compute the mean, standard deviation, and other moments. Load the data_moments() function, run it on the exrates data, and write a knitr::kable() report.
acf(coredata(exrates.xts[ , 1:4])) # returns

acf(coredata(exrates.xts[ , 5:8])) # sizes

pacf(coredata(exrates.xts[ , 1:4])) # returns

pacf(coredata(exrates.xts[ , 5:8])) # sizes

# Load the data_moments() function
## data_moments function
## INPUTS: r vector
## OUTPUTS: list of scalars (mean, sd, median, skewness, kurtosis)
data_moments <- function(data){
  library(moments)
  library(matrixStats)
  mean.r <- colMeans(data)
  median.r <- colMedians(data)
  sd.r <- colSds(data)
  IQR.r <- colIQRs(data)
  skewness.r <- skewness(data)
  kurtosis.r <- kurtosis(data)
  result <- data.frame(mean = mean.r, median = median.r, std_dev = sd.r, IQR = IQR.r, skewness = skewness.r, kurtosis = kurtosis.r)
  return(result)
}
# Run data_moments()
answer <- data_moments(exrates.xts[, 5:8])
# Build pretty table
answer <- round(answer, 4)
knitr::kable(answer)
               mean    median   std_dev   IQR      skewness   kurtosis
USD.EUR.size   0.4003  0.2935   0.3695    0.4313   1.7944     8.0424
USD.GBP.size   0.4008  0.2995   0.4266    0.4173   6.0881     93.5604
USD.CNY.size   0.1027  0.0601   0.1375    0.1154   3.9004     31.0222
USD.JPY.size   0.4533  0.3250   0.4455    0.4684   2.2201     10.4898
mean(exrates.xts[,4])
## [1] 0.01419772

Part 2

We will use the data from the first part to investigate the interactions among the distributions of exchange rates.

Problem

We want to characterize the distribution of up and down movements visually. Also we would like to repeat the analysis periodically for inclusion in management reports.

Questions

  1. How can we show the shape of our exposure to euros, especially given our tolerance for risk? Suppose corporate policy set tolerance at 95%. Let’s use the exrates.df data frame with ggplot2 and the cumulative relative frequency function stat_ecdf.
exrates.tol.pct <- 0.95
exrates.tol <- quantile(exrates.df$returns.USD.EUR, exrates.tol.pct)
exrates.tol.label <- paste("Tolerable Rate = ", round(exrates.tol, 2), "%", sep = "")
p <- ggplot(exrates.df, aes(returns.USD.EUR, fill = direction.USD.EUR.dir)) + stat_ecdf(colour = "blue", size = 0.75) + geom_vline(xintercept = exrates.tol, colour = "red", size = 1.5) + annotate("text", x = exrates.tol + 1 , y = 0.75, label = exrates.tol.label, colour = "darkred")
p

  2. What is the history of correlations in the exchange rate markets? If there is such a history, then we have to manage the risk that conducting business in one country will affect business in another, and that bad outcomes will be followed by more bad outcomes more often than by good ones. We will create a rolling correlation function, corr_rolling, and embed it in the rollapply() function (look this one up!).
one <- ts(exrates.df$returns.USD.EUR)
two <- ts(exrates.df$returns.USD.GBP)
# or
one <- ts(exrates.zr[,1])
two <- ts(exrates.zr[,2])
ccf(one, two, main = "GBP vs. EUR", lag.max = 20, xlab = "", ylab = "", ci.col = "red")

# build function to repeat these routines
run_ccf <- function(one, two, main = "one vs. two", lag = 20, color = "red"){
  # one and two are equal length series
  # main is title
  # lag is number of lags in cross-correlation
  # color is color of dashed confidence interval bounds
  stopifnot(length(one) == length(two))
  one <- ts(one)
  two <- ts(two)
  main <- main
  lag <- lag
  color <- color
  ccf(one, two, main = main, lag.max = lag, xlab = "", ylab = "", ci.col = color)
  #end run_ccf
}
one <- ts(exrates.df$returns.USD.EUR)
two <- ts(exrates.df$returns.USD.GBP)
# or
one <- exrates.zr[,1]
two <- exrates.zr[,2]
title <- "EUR vs. GBP"
run_ccf(one, two, main = title, lag = 20, color = "red")

# now for volatility (sizes)
one <- ts(abs(exrates.zr[,1]))
two <- ts(abs(exrates.zr[,2]))
title <- "EUR vs. GBP: volatility"
run_ccf(one, two, main = title, lag = 20, color = "red")

# We see some small correlations across time in the raw returns. More revealing, we see volatility clustering when we use the return sizes.

One more experiment: rolling correlations and volatilities using these functions:

corr_rolling <- function(x) {   
  dim <- ncol(x)    
  corr_r <- cor(x)[lower.tri(diag(dim), diag = FALSE)]  
  return(corr_r)    
}
vol_rolling <- function(x){
  library(matrixStats)
  vol_r <- colSds(x)
  return(vol_r)
}
ALL.r <- exrates.xts[, 1:4]
window <- 90 #reactive({input$window})
corr_r <- rollapply(ALL.r, width = window, corr_rolling, align = "right", by.column = FALSE)
colnames(corr_r) <- c("EUR.GBP", "EUR.CNY", "EUR.JPY", "GBP.CNY", "GBP.JPY", "CNY.JPY")
vol_r <- rollapply(ALL.r, width = window, vol_rolling, align = "right", by.column = FALSE)
colnames(vol_r) <- c("EUR.vol", "GBP.vol", "CNY.vol", "JPY.vol")
year <- format(index(corr_r), "%Y")
r_corr_vol <- merge(ALL.r, corr_r, vol_r, year)
  3. How related are correlations and volatilities? Put another way, do we have to be concerned that inter-market transactions (e.g., customers and vendors transacting in more than one currency) can affect transactions in a single market? Let’s model the exrate data to understand how correlations and volatilities depend upon one another.
library(quantreg)
## Loading required package: SparseM
## 
## Attaching package: 'SparseM'
## The following object is masked from 'package:base':
## 
##     backsolve
taus <- seq(.05,.95,.05)    # Roger Koenker UIC Bob Hogg and Allen Craig
fit.rq.CNY.JPY <- rq(log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)    
## Warning in log(CNY.JPY): NaNs produced
fit.lm.CNY.JPY <- lm(log(CNY.JPY) ~ log(JPY.vol), data = r_corr_vol)  
## Warning in log(CNY.JPY): NaNs produced
# Some test statements  
CNY.JPY.summary <- summary(fit.rq.CNY.JPY, se = "boot")
CNY.JPY.summary
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.05
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -5.07014   0.45048  -11.25493   0.00000
## log(JPY.vol)  -0.96318   0.58635   -1.64267   0.10079
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.1
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -3.80541   0.29177  -13.04248   0.00000
## log(JPY.vol)  -0.05799   0.36457   -0.15906   0.87365
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.15
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -3.28519   0.15357  -21.39270   0.00000
## log(JPY.vol)   0.10332   0.28573    0.36161   0.71773
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.2
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -3.03994   0.07891  -38.52492   0.00000
## log(JPY.vol)  -0.10061   0.19218   -0.52352   0.60074
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.25
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.89330   0.08467  -34.17045   0.00000
## log(JPY.vol)  -0.35602   0.12768   -2.78838   0.00540
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.3
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.68656   0.11848  -22.67581   0.00000
## log(JPY.vol)  -0.28125   0.15202   -1.85002   0.06463
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.35
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.45886   0.08840  -27.81456   0.00000
## log(JPY.vol)  -0.10791   0.12478   -0.86481   0.38737
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.4
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.34030   0.11696  -20.00984   0.00000
## log(JPY.vol)  -0.06848   0.13378   -0.51185   0.60887
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.45
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.08348   0.07514  -27.72642   0.00000
## log(JPY.vol)   0.08756   0.08291    1.05599   0.29124
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.5
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -1.91646   0.06022  -31.82655   0.00000
## log(JPY.vol)   0.18794   0.06200    3.03144   0.00250
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.55
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -1.77384   0.06644  -26.69722   0.00000
## log(JPY.vol)   0.26318   0.08699    3.02549   0.00255
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.6
## 
## Coefficients:
##              Value    Std. Error t value  Pr(>|t|)
## (Intercept)  -1.63052  0.23201   -7.02764  0.00000
## log(JPY.vol)  0.25338  0.22871    1.10785  0.26821
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.65
## 
## Coefficients:
##              Value    Std. Error t value  Pr(>|t|)
## (Intercept)  -0.98780  0.14750   -6.69692  0.00000
## log(JPY.vol)  0.67504  0.19847    3.40127  0.00070
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.7
## 
## Coefficients:
##              Value    Std. Error t value  Pr(>|t|)
## (Intercept)  -0.62254  0.21886   -2.84444  0.00455
## log(JPY.vol)  0.78950  0.29759    2.65297  0.00811
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.75
## 
## Coefficients:
##              Value    Std. Error t value  Pr(>|t|)
## (Intercept)  -0.38695  0.07957   -4.86276  0.00000
## log(JPY.vol)  0.96979  0.14393    6.73772  0.00000
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.8
## 
## Coefficients:
##              Value    Std. Error t value  Pr(>|t|)
## (Intercept)  -0.39138  0.04227   -9.25996  0.00000
## log(JPY.vol)  0.83000  0.13053    6.35891  0.00000
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.85
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -0.37793   0.03448  -10.96198   0.00000
## log(JPY.vol)   0.59616   0.07478    7.97251   0.00000
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.9
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -0.35178   0.01789  -19.66668   0.00000
## log(JPY.vol)   0.55875   0.05326   10.49084   0.00000
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.95
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -0.35746   0.00549  -65.09312   0.00000
## log(JPY.vol)   0.44931   0.01371   32.76940   0.00000
plot(CNY.JPY.summary)
## Warning in log(CNY.JPY): NaNs produced


Here is the quantile regression part of the analysis.

  1. We set taus as the quantiles of interest.
  2. We run the quantile regression using the quantreg package and a call to the rq function.
  3. We can overlay the quantile regression results onto the standard linear model regression.
  4. We can sensitize our analysis with the range of upper and lower bounds on the parameter estimates of the relationship between correlation and volatility.
  5. The log()-log() transformation allows us to interpret the regression coefficients as elasticities, which vary with the quantile. The larger the elasticity, especially if the absolute value is greater than one, the more risk dependence one market has on the other.
  6. The risk relationships can also be viewed year by year. Here we see very different patterns and scenarios.
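The elasticity reading in point 5 can be sketched with a minimal simulated example (hypothetical data, not the exrates series): in a log()-log() fit, the slope estimates the percent change in y associated with a 1 percent change in x.

```r
# Simulated log-log elasticity sketch: the true relationship is
# y = x^0.8 * noise, so the log-log slope should recover ~0.8,
# i.e., a 1% rise in x goes with roughly a 0.8% rise in y.
set.seed(42)
x <- runif(500, 0.5, 2)
y <- x^0.8 * exp(rnorm(500, sd = 0.2))
b <- coef(lm(log(y) ~ log(x)))["log(x)"]
round(b, 2)  # close to the true elasticity of 0.8
```

Replacing lm() with rq() at several taus, as in the code below, shows how this elasticity can differ across quantiles of the response.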
library(quantreg)
library(magick)
## Linking to ImageMagick 6.9.9.14
## Enabled features: cairo, freetype, fftw, ghostscript, lcms, pango, rsvg, webp
## Disabled features: fontconfig, x11
img <- image_graph(res = 96)
datalist <- split(r_corr_vol, r_corr_vol$year)
out <- lapply(datalist, function(data){
  p <- ggplot(data, aes(JPY.vol, CNY.JPY)) +
    geom_point() + 
    ggtitle(data$year) + 
    geom_quantile(quantiles = c(0.05, 0.95)) + 
    geom_quantile(quantiles = 0.5, linetype = "longdash") +
    geom_density_2d(colour = "red")  
  print(p)
})
## Warning: Removed 89 rows containing non-finite values (stat_quantile).
## Smoothing formula not specified. Using: y ~ x
## Warning: Removed 89 rows containing non-finite values (stat_quantile).
## Smoothing formula not specified. Using: y ~ x
## Warning: Removed 89 rows containing non-finite values (stat_density2d).
## Warning: Removed 89 rows containing missing values (geom_point).
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
while (!is.null(dev.list()))  dev.off()
#img <- image_background(image_trim(img), 'white')
animation <- image_animate(img, fps = .5)
animation   

2. List in the text the ‘R’ skills needed to complete this project.

We used a package called zoo to create a time series object and examine the exchange rates over time. Additionally, we used acf(), the autocorrelation function, and pacf(), the partial autocorrelation function, to show how persistent the returns are over time.

3. Explain each of the functions (e.g., ggplot()) used to compute and visualize results.

First, we pass the data into diff(log(as.matrix())) to compute percentage changes as log returns of the currency pairs, using as.matrix to force a numeric type. We then use abs() to capture the size of the percentage changes as one indicator of volatility. Another indicator is the direction of the changes, which we compute with ifelse(). We combine these indicators and build another time series object. We plot the series with the ggplot2 and plotly packages; plotly is a visualization tool that lets users interact with the graph. To plot a time series, we use autoplot.zoo(). Within aes() we pick x = the time index of the series we created and y = the log returns of the currency pairs or the direction of the changes. We then examine the persistence of returns using acf() and pacf(); before doing so, we use coredata() to strip the time index and return the observations only. Finally, we run data_moments() to obtain basic statistics, especially kurtosis, that help us understand the data.

In Part 2, we plot the shape of our exposure to euros given a corporate risk tolerance set at 95%. We use ggplot() with data = exrates.df and aes(x = returns.USD.EUR, fill = direction.USD.EUR.dir), then add the cumulative relative frequency function stat_ecdf() and a geom_vline(). We next look into the history of correlations in the exchange rate markets using the cross-correlation function ccf(). Here ccf() takes two time series, one = returns.USD.EUR and two = returns.USD.GBP, and computes their cross-correlation up to lag 20; wrapping the inputs in abs() lets us examine the cross-correlation of volatility instead. We then experiment with rolling correlations and volatilities: we create a rolling correlation function, corr_rolling(), and embed it in rollapply(), a generic function for applying a function to rolling margins of an array. rollapply() takes a data frame as its first argument, the window width as its second, the function (FUN, here corr_rolling()) as its third, and align and by.column as its fourth and fifth. Lastly, we investigate the relationship between correlations and volatilities. We set taus as the quantiles of interest and run a quantile regression with rq(), overlaying the results onto a standard linear model regression. We sensitize the analysis with the range of upper and lower bounds on the parameter estimates of the relationship between correlation and volatility. The log()-log() transformation lets us interpret the regression coefficients as elasticities, which vary with the quantile; the larger the elasticity, especially when its absolute value exceeds one, the more risk dependence one market has on the other. The risk relationships can also be viewed year by year.

4. Discuss how well did the results begin to answer the business questions posed at the beginning of each part of the project.

The data did indicate that the volatility of exchange rates is significantly correlated with the cash flow of accounts. It shows the general nature of the data set through the appreciation and depreciation of each currency. We use the changes in size and in exchange rate percentages over 2013-2018 to build a table of the mean, median, standard deviation, IQR, skewness, and kurtosis for each currency’s transactions with the USD. The data also shows our exposure to other currencies such as the euro, using a set tolerance of 95% and the cumulative relative frequency function to break the rates down into tolerable-rate constraints for the exposure.

The analysis then extends over time using the rolling correlation function embedded in the rollapply() function. The resulting charts show the confidence limits as red dashed lines and the return correlations as black vertical lines; the lines that fall outside the red limits show some small raw correlations across time in the return data from both charts. Finally, the estimated relationship between correlation and volatility expresses risk as an elasticity between two markets: when its absolute value is greater than 1, one market carries more risk dependence on the other.

This would help an accounts receivable manager better determine the risk and exposure from making transactions in multiple currencies and their impact on the overall risk of loss in each transaction. The parameters managers select can help them determine at what exchange rate and what volume a transaction exposes them to more risk and more loss. This should help them cut losses from exchanges and determine a more effective volume at which to execute transactions, improving their customers’ cash flow.